
    Brokerage Intermediation in the Commercial Property Market

    This study is one of the first to investigate brokerage intermediation effects in the income-producing commercial property market. Using multifamily sales data from the Atlanta and Phoenix markets under alternative brokerage specifications, we find little evidence of systematic, differential transaction pricing outcomes attributable to the presence of brokers. The results suggest that brokerage intermediation effects are likely minimal in commercial markets that are relatively transparent, whose participants are knowledgeable, and where value and price are typically determined by a property’s income-generating capacity.

    Dual Adaptive Transformations for Weakly Supervised Point Cloud Segmentation

    Weakly supervised point cloud segmentation, i.e., semantically segmenting a point cloud with only a few labeled points in the whole 3D scene, is highly desirable because collecting abundant dense annotations for model training is a heavy burden. However, it remains challenging for existing methods to segment 3D point clouds accurately, since limited annotated data may provide insufficient guidance for propagating labels to unlabeled data. Given the promising progress of smoothness-based methods, in this paper we advocate applying a consistency constraint under various perturbations to effectively regularize unlabeled 3D points. Specifically, we propose a novel DAT (Dual Adaptive Transformations) model for weakly supervised point cloud segmentation, in which the dual adaptive transformations are performed via an adversarial strategy at both the point level and the region level, aiming to enforce local and structural smoothness constraints on 3D point clouds. We evaluate the proposed DAT model with two popular backbones on the large-scale S3DIS and ScanNet-V2 datasets. Extensive experiments demonstrate that our model can effectively leverage the unlabeled 3D points and achieve significant performance gains on both datasets, setting a new state of the art for weakly supervised point cloud segmentation.
    Comment: ECCV 202
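    The consistency constraint described above can be sketched in a minimal form: predict on a point cloud and on a perturbed copy, then penalize the disagreement. This is a generic smoothness/consistency regularizer in the spirit of the abstract, not the paper's actual DAT model (which learns the transformations adversarially); here the perturbation is fixed Gaussian jitter and the "network" is a toy linear classifier, both assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def predict(points, W):
    # Toy stand-in for a segmentation network: per-point class
    # probabilities from a linear map over xyz coordinates.
    return softmax(points @ W)

def consistency_loss(points, W, noise_scale=0.01):
    """Mean squared difference between predictions on the original
    points and on a perturbed copy. Minimizing this encourages the
    model to be smooth under the perturbation, which is how unlabeled
    points contribute a training signal without any annotations."""
    perturbed = points + noise_scale * rng.standard_normal(points.shape)
    p_clean = predict(points, W)
    p_pert = predict(perturbed, W)
    return float(((p_clean - p_pert) ** 2).mean())
```

In a real pipeline this term would be added to the supervised loss on the few labeled points, with the perturbation chosen (here, adversarially learned) to be maximally informative.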

    Cardiovascular health status in Chinese adults in urban areas: Analysis of the Chinese Health Examination Database 2010

    Background: The American Heart Association (AHA) recently developed definitions of cardiovascular health for adults and children based on 7 cardiovascular disease risk factors or health behaviors. We applied this new construct to examine cardiovascular health status in adult Chinese urban residents. Methods: Data from 1,012,418 subjects aged 20–65 years (55% men; mean age, 42.4 years) who underwent health examinations at 58 health examination centers across China were analyzed. The AHA ideal health behaviors index and ideal health factors index were evaluated among the subjects. Results: Only 0.6% of male and 2.6% of female subjects met all 7 health components, and only 39.1% of the subjects met 5 or more components of ideal cardiovascular health. The prevalence of “ideal”, “intermediate”, and “poor” cardiovascular health was 1.5%, 33.9%, and 64.6%, respectively. Conclusion: About two-thirds of the adult Chinese urban population has “poor” cardiovascular health. Comprehensive individual and population-based interventions must be developed to improve cardiovascular health status in China.

    HyperTime: Hyperparameter Optimization for Combating Temporal Distribution Shifts

    In this work, we propose a hyperparameter optimization method named HyperTime to find hyperparameters that are robust to potential temporal distribution shifts in unseen test data. Our work is motivated by the important observation that, in many cases, temporally robust predictive performance can be achieved via hyperparameter optimization. Based on this observation, we leverage the “worst-case-oriented” philosophy from the robust optimization literature to find such robust hyperparameter configurations. HyperTime imposes a lexicographic priority order on the average validation loss and the worst-case validation loss over chronological validation sets. We perform a theoretical analysis of the upper bound of the expected test loss, which reveals the unique advantages of our approach. We also demonstrate the strong empirical performance of the proposed method on multiple machine learning tasks with temporal distribution shifts.
    Comment: 19 pages, 7 figure
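    The lexicographic priority order described in the abstract can be sketched as a two-stage selection rule: first minimize the average loss over chronological validation folds, then break near-ties by the worst-case fold loss. This is an illustrative sketch of the idea only, not the authors' implementation; the `tol` tolerance and the simple enumeration over candidate configs are assumptions made here.

```python
import numpy as np

def lexicographic_select(configs, losses, tol=0.05):
    """Pick a hyperparameter config lexicographically:
    1) primary objective: average validation loss over chronological folds;
    2) among configs within a relative tolerance `tol` of the best average,
       secondary objective: smallest worst-case (max) fold loss.
    `losses[i]` is the list of per-fold losses for configs[i]."""
    losses = [np.asarray(l, dtype=float) for l in losses]
    avg = np.array([l.mean() for l in losses])
    worst = np.array([l.max() for l in losses])
    best_avg = avg.min()
    # Feasible set: near-optimal on the primary (average) objective.
    feasible = np.where(avg <= best_avg * (1.0 + tol))[0]
    # Tie-break on the feasible set by worst-case loss.
    pick = feasible[np.argmin(worst[feasible])]
    return configs[pick]
```

For example, two configs with equal average loss but different worst folds: the rule prefers the one whose worst chronological fold is least bad, which is exactly the temporal-robustness behavior the abstract targets.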